8 research outputs found

    Grounding Language to Autonomously-Acquired Skills via Goal Generation

    We are interested in the autonomous acquisition of repertoires of skills. Language-conditioned reinforcement learning (LC-RL) approaches are great tools in this quest, as they make it possible to express abstract goals as sets of constraints on states. However, most LC-RL agents are not autonomous: they cannot learn without external instructions and feedback. Besides, their direct conditioning on language cannot account for the goal-directed behavior of pre-verbal infants and strongly limits the expression of behavioral diversity for a given language input. To resolve these issues, we propose a new conceptual approach to language-conditioned RL: the Language-Goal-Behavior architecture (LGB). LGB decouples skill learning and language grounding via an intermediate semantic representation of the world. To showcase the properties of LGB, we present a specific implementation called DECSTR. DECSTR is an intrinsically motivated learning agent endowed with an innate semantic representation describing spatial relations between physical objects. In a first stage (G -> B), it freely explores its environment and targets self-generated semantic configurations. In a second stage (L -> G), it trains a language-conditioned goal generator to generate semantic goals that match the constraints expressed in language-based inputs. We showcase the additional properties of LGB with respect to both an end-to-end LC-RL approach and a similar approach leveraging non-semantic, continuous intermediate representations. Intermediate semantic representations help satisfy language commands in a diversity of ways, enable strategy switching after a failure and facilitate language grounding. (Published at ICLR 2021.)
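    The core of the LGB decoupling is that one instruction maps to a distribution over semantic goals rather than directly to behavior. Below is a minimal, self-contained Python sketch of that two-stage pipeline; all names and interfaces (PREDICATES, GoalConditionedPolicy, LanguageGoalGenerator) are illustrative assumptions, not the authors' released code, and the toy lookup stands in for the learned goal generator.

```python
# Hedged sketch of the LGB pipeline: instruction -> semantic goal -> behavior.
import random
from itertools import product

# A semantic configuration is a binary vector over spatial predicates.
PREDICATES = ["close(o1,o2)", "close(o1,o3)", "above(o1,o2)", "above(o2,o1)"]

def all_goals():
    # Every binary semantic configuration over the predicate set.
    return list(product([0, 1], repeat=len(PREDICATES)))

class GoalConditionedPolicy:
    # Placeholder for the skill-learning component (stage G -> B): maps a
    # (state, semantic goal) pair to an action; trained by intrinsically
    # motivated RL in the full system.
    def act(self, state, semantic_goal):
        return "noop"  # stand-in for a learned controller

class LanguageGoalGenerator:
    # Stage L -> G: maps an instruction to semantic goals that satisfy it.
    # A toy lookup here; the paper trains a generative model instead.
    def __init__(self):
        self.compatible = {
            "put o1 close to o2": [g for g in all_goals() if g[0] == 1],
        }
    def sample(self, instruction):
        return random.choice(self.compatible[instruction])

# One instruction maps to many semantic goals, which is where behavioral
# diversity for a single language input comes from.
generator, policy = LanguageGoalGenerator(), GoalConditionedPolicy()
goal = generator.sample("put o1 close to o2")
print(goal, policy.act(state=None, semantic_goal=goal))
```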

    Language-Conditioned Goal Generation: a New Approach to Language Grounding in RL

    In the real world, linguistic agents are also embodied agents: they perceive and act in the physical world. The notion of language grounding questions the interactions between language and embodiment: how do learning agents connect, or ground, linguistic representations to the physical world? This question has recently been approached by the reinforcement learning community under the framework of instruction-following agents, in which behavioral policies or reward functions are conditioned on the embedding of an instruction expressed in natural language. This paper proposes another approach: using language to condition goal generators. Given any goal-conditioned policy, one can train a language-conditioned goal generator to generate language-agnostic goals for the agent. This method decouples sensorimotor learning from language acquisition and enables agents to demonstrate a diversity of behaviors for any given instruction. We propose a particular instantiation of this approach and demonstrate its benefits.
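    To make the proposed conditioning concrete, here is a hedged sketch of a language-conditioned goal generator: it fits an empirical conditional distribution p(goal | instruction) from (instruction, achieved-goal) pairs and samples language-agnostic goals for any goal-conditioned policy. The class and method names are hypothetical, and the paper trains a generative model rather than this lookup.

```python
# Toy empirical goal generator: p(goal | instruction) as a sampled lookup.
from collections import defaultdict
import random

class EmpiricalGoalGenerator:
    def __init__(self):
        self.goals_per_instruction = defaultdict(list)

    def record(self, instruction, achieved_goal):
        # Pairs collected by describing what the agent happened to achieve.
        self.goals_per_instruction[instruction].append(achieved_goal)

    def sample(self, instruction):
        # Sampling (rather than taking a single fixed goal) preserves a
        # diversity of behaviors for one and the same instruction.
        return random.choice(self.goals_per_instruction[instruction])

gen = EmpiricalGoalGenerator()
gen.record("put the red block on the blue one", (0, 1, 0))
gen.record("put the red block on the blue one", (1, 1, 0))
print(gen.sample("put the red block on the blue one"))
```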

    Towards Teachable Autonomous Agents

    Autonomous discovery and direct instruction are two extreme sources of learning in children, but educational sciences have shown that intermediate approaches such as assisted discovery or guided play result in better skill acquisition. In artificial intelligence, this dichotomy translates into the distinction between autonomous agents, which learn in isolation from their own signals, and interactive learning agents, which can be taught by social partners but generally lack autonomy. In between should stand teachable autonomous agents: agents that learn from both internal and teaching signals to benefit from the higher efficiency of assisted discovery. Designing such agents could bring progress in two ways. First, very concretely, it would give non-expert users in the real world a way to drive the learning behavior of agents towards their expectations. Second, more fundamentally, it might be a key step towards endowing agents with the capabilities required for general intelligence. The purpose of this paper is to elucidate the key obstacles standing in the way of designing such agents. We proceed in four steps. First, we build on seminal work by Bruner to extract relevant features of the assisted discovery processes happening between a child and a tutor. Second, we highlight how current research on intrinsically motivated agents is paving the way towards teachable and autonomous agents. In particular, we focus on autotelic agents, i.e. agents equipped with forms of intrinsic motivation that enable them to represent, self-generate and pursue their own goals. We argue that such autotelic capabilities on the learner side are key to the discovery process. Third, we adopt a social learning perspective on the interaction between a tutor and a learner to highlight components that are still missing from these agents before they can be taught by ordinary people using natural pedagogy. Finally, we provide a list of specific research questions that emerge from the perspective of extending these agents with assisted learning capabilities.

    Teaching Predicate-Based Autotelic Agents (Enseigner des agents autotéliques basés sur des prédicats)

    As part of the quest to design embodied machines that autonomously explore their environments, discover new behaviors and acquire open-ended repertoires of skills, artificial intelligence has long drawn inspiration from developmental psychology and cognitive science, which investigate the remarkable continuous and unbounded learning of humans. This gave birth to the field of developmental robotics, which aims to design autonomous artificial agents capable of self-organizing their own learning trajectories based on their intrinsic motivations. It bakes the developmental framework of intrinsically motivated goal exploration processes (IMGEPs) into reinforcement learning (RL). This combination has recently been introduced as autotelic reinforcement learning, where autotelic agents are intrinsically motivated to self-represent, self-organize and autonomously learn about their own goals. Naturally, such agents need good exploration capabilities, as they must first physically encounter a goal in order to take ownership of it and learn about it. Unfortunately, discovering interesting behavior is usually tricky, especially in hard-exploration setups where reward signals are sparse, deceptive or adversarial. In such scenarios, the agents' physical situatedness (in the Piagetian sense of the term) seems insufficient. Luckily, research in developmental psychology and education sciences has highlighted the remarkable role of socio-cultural signals in the development of human children. This social situatedness (in the Vygotskyan sense of the term) enhances toddlers' exploration capabilities, creativity and development. However, deep RL considers social interaction as dictating instructions to agents, depriving them of their autonomy. This research introduces teachable autotelic agents, a novel family of autonomous machines that can learn both alone and from external social signals. We formalize this family as hybrid goal exploration processes (HGEPs), where autotelic agents are endowed with an internalization mechanism to rehearse social signals and with a goal source selector to actively query for social guidance. The manuscript is organized in two parts. In the first part, we focus on the design of teachable autotelic agents and attempt to build in the properties that would later serve social interaction. Namely, we introduce predicate-based autotelic agents, a novel family of autotelic agents that represent their goals using spatial binary predicates. This design builds on the Mandlerian view of prelinguistic concept acquisition, which suggests that toddlers are endowed with innate mechanisms enabling them to translate spatio-temporal information into an iconic static form. We show that the underlying semantic representation plays a pivotal role between raw sensory inputs and language inputs, enabling the decoupling of sensorimotor learning and language grounding. We also investigate the design of such agents' policies and state-action value functions, and argue that combining Graph Neural Networks (GNNs) with relational predicates provides a lightweight computational scheme that transfers efficiently between skills. In the second part, we formalize social interactions as a goal exploration process. We introduce Help Me Explore (HME), a novel social interaction protocol in which an expert social partner progressively guides the learning agent beyond its zone of proximal development (ZPD). The agent actively chooses to query its social partner whenever it estimates that it is no longer progressing on the goals it already knows. It eventually internalizes these social signals, becomes less dependent on its social partner and maximizes control over its goal space.
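    The active querying described above can be made concrete with a small sketch of a goal-source selector driven by learning progress; the interface and the progress estimator below are assumptions for illustration, not the thesis implementation.

```python
# Hedged sketch of an HME-style goal-source selector: query the social
# partner only when estimated learning progress stalls on all known goals.
class GoalSourceSelector:
    def __init__(self, progress_threshold=0.01):
        self.progress_threshold = progress_threshold
        self.success_history = {}  # goal -> list of 0/1 success flags

    def record(self, goal, success):
        self.success_history.setdefault(goal, []).append(int(success))

    def learning_progress(self, goal):
        # Absolute difference between recent and older success rates, a
        # common proxy for learning progress in the autotelic literature.
        h = self.success_history.get(goal, [])
        if len(h) < 10:
            return 1.0  # too little data: assume progress is still possible
        recent, older = h[-5:], h[-10:-5]
        return abs(sum(recent) / 5 - sum(older) / 5)

    def choose_source(self, known_goals):
        # Internal (self-generated) goals by default; social guidance only
        # once the agent stops progressing on everything it already knows.
        if known_goals and all(self.learning_progress(g) < self.progress_threshold
                               for g in known_goals):
            return "social_partner"
        return "self_generated"

selector = GoalSourceSelector()
for _ in range(10):
    selector.record("stack(o1,o2)", success=1)  # mastered: progress is flat
print(selector.choose_source(["stack(o1,o2)"]))  # -> "social_partner"
```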

    Learning Object-Centered Autotelic Behaviors with Graph Neural Networks

    Although humans live in an open-ended world and endlessly face new challenges, they do not have to learn from scratch each time they face the next one. Rather, they have access to a handful of previously learned skills, which they rapidly adapt to new situations. In artificial intelligence, autotelic agents, which are intrinsically motivated to represent and set their own goals, exhibit promising skill adaptation capabilities. However, these capabilities are highly constrained by their policy and goal space representations. In this paper, we investigate the impact of these representations on the learning and transfer capabilities of autotelic agents. We study different implementations of autotelic agents using four types of Graph Neural Network policy representations and two types of goal spaces, either geometric or predicate-based. By testing agents on unseen goals, we show that combining sufficiently expressive object-centered architectures with semantic relational goals helps agents learn to reach more difficult goals. We also release our graph-based implementations to encourage further research in this direction. (15 pages, 10 figures; published at the Conference on Lifelong Learning Agents, CoLLAs 2022.)
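    As an illustration of the object-centered, shared-weight computation studied in this paper, here is a small NumPy sketch of a relational policy that applies one shared edge network to every ordered object pair and aggregates with permutation-invariant pooling; the architecture details (dimensions, pooling, readout) are assumptions for illustration, not the released implementation.

```python
# Toy object-centered relational policy: shared per-pair computation plus
# permutation-invariant pooling, in the spirit of GNN-based autotelic agents.
import numpy as np

rng = np.random.default_rng(0)

class RelationalPolicy:
    def __init__(self, obj_dim, goal_dim, hidden=32, act_dim=4):
        # One shared edge network applied to every ordered object pair:
        # weight sharing is what lets the policy transfer across skills.
        self.W_edge = rng.normal(size=(2 * obj_dim + goal_dim, hidden)) * 0.1
        self.W_out = rng.normal(size=(hidden, act_dim)) * 0.1

    def forward(self, objects, goal):
        messages = []
        for i in range(len(objects)):
            for j in range(len(objects)):
                if i == j:
                    continue
                pair = np.concatenate([objects[i], objects[j], goal])
                messages.append(np.tanh(pair @ self.W_edge))
        # Max-pooling over pairs is invariant to object ordering.
        pooled = np.max(messages, axis=0)
        return pooled @ self.W_out

policy = RelationalPolicy(obj_dim=6, goal_dim=4)
objects = [rng.normal(size=6) for _ in range(3)]  # e.g. 3 blocks
goal = np.array([1.0, 0.0, 1.0, 0.0])             # binary predicate goal
print(policy.forward(objects, goal))
```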